This Defense Company Made AI Agents That Blow Things Up

WIRED

Scout AI is using technology borrowed from the AI industry to power lethal weapons--and recently demonstrated its explosive potential. Like many Silicon Valley companies today, Scout AI is training large AI models and agents to automate chores. The big difference is that instead of writing code, answering emails, or buying stuff online, Scout AI's agents are designed to seek and destroy things in the physical world with exploding drones. In a recent demonstration, held at an undisclosed military base in central California, Scout AI's technology was put in charge of a self-driving off-road vehicle and a pair of lethal drones. The agents used these systems to find a truck hiding in the area, and then blew it to bits using an explosive charge.



Why are experts sounding the alarm on AI risks?

Al Jazeera

Why are experts sounding the alarm on AI risks? In recent months, artificial intelligence has been in the news for the wrong reasons: deepfakes used to scam people, AI systems used in cyberattacks, and chatbots encouraging suicides, among others. Experts are already warning that the technology could spiral out of control. Researchers at some of the most prominent AI companies have quit their jobs in recent weeks and publicly sounded the alarm about fast-paced technological development posing risks to society. The recent slew of public resignations by those tasked with ensuring AI remains safe for humanity is making conversations about how to regulate the technology and slow its development more urgent, even as billions are being generated in AI investments.


Reports of the Association for the Advancement of Artificial Intelligence's 2025 Fall Symposium Series

Interactive AI Magazine

The Association for the Advancement of Artificial Intelligence's 2025 Fall Symposium Series was held November 6-8, 2025, at the Westin Arlington Gateway in Arlington, Virginia. There were six symposia in the program: AI for Social Good: Emerging Methods, Measures, Data, and Ethics; AI Trustworthiness and Risk Assessment for Challenged Contexts; Engineering Safety-Critical AI Systems; First AAAI Symposium on Quantum Information and Machine Learning: Bridging Quantum Computing and Artificial Intelligence; Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI for Health; and Unifying Representations for Robot Application Development. This report contains summaries of the symposia, which were submitted by most, but not all, of the symposium organizers.

AI has demonstrated transformative potential across sectors such as aging, combating information manipulation, disaster response, education, environmental sustainability, government, healthcare, social care, transportation, and urban planning. Yet the systematic development of AI for Social Good remains fragmented across those many research communities, with limited convergence around effective methodologies, equitable impact measurement, or access to important data and long-term engagement with targeted populations. The main objective of this symposium was to convene across disciplines and engage researchers, practitioners, and policymakers, with a particular focus on finding methods, measures, and data that could be used in multiple settings. There were roughly 30 participants.


AI Is Here to Replace Nuclear Treaties. Scared Yet?

WIRED

AI Is Here to Replace Nuclear Treaties. The last major nuclear arms treaty between the US and Russia just expired. Some experts believe a combination of satellite surveillance, AI, and human reviewers can take its place. For half a century, the world's nuclear powers relied on an intricate and complex series of treaties that slowly and steadily reduced the number of nuclear weapons on the planet. Those treaties are gone now, and it doesn't appear that they'll be coming back anytime soon.


AI is coming to Olympic judging: what makes it a game changer?

AIHub

AI is coming to Olympic judging: what makes it a game changer? As the International Olympic Committee (IOC) embraces AI-assisted judging, this technology promises greater consistency and improved transparency. Yet research suggests that trust, legitimacy, and cultural values may matter just as much as technical accuracy. In 2024, the IOC unveiled its Olympic AI Agenda, positioning artificial intelligence as a central pillar of future Olympic Games. This vision was reinforced at the very first Olympic AI Forum, held in November 2025, where athletes, federations, technology partners, and policymakers discussed how AI could support judging, athlete preparation, and the fan experience.



AI Could Reshape Clinical Trials--and the Business of Pharma

TIME - Tech

Welcome back to TIME's new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? We hear a lot about how AI is accelerating drug discovery. But the number of drugs approved by the FDA has remained constant through the AI revolution, at around 50 per year. "The biggest problem in bringing new medicine to patients hasn't been drug discovery for a long time," says Ben Liu, the founder and CEO of Formation Bio, an AI company working in the biotech space.


Moltbook Is a Social Network for AI Bots. Here's How It Works

TIME - Tech

Moltbook Is a Social Network for AI Bots. Pillay is an editorial fellow at TIME. In this photo illustration, a smartphone displays the Moltbook website homepage.


'Deepfakes spreading and more AI companions': seven takeaways from the latest artificial intelligence safety report

The Guardian

The international AI safety report warns systems are improving rapidly - but remain prone to 'hallucinations' and hard to control. The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market. Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the "daunting challenges" posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.